
Neural Information Processing Systems

RTFormer consists of several convolution blocks and RTFormer blocks, and each RTFormer block contains different types of attention. Table 2 shows the performance of RTFormer on ImageNet classification. The first three results for multi-head external attention use r = [0.125, 0.25, 1], respectively. As illustrated in Table 3, multi-head self-attention achieves 32.7 mIoU, outperforming multi-head external attention under all settings of r. Multi-head external attention achieves good inference speed, which benefits from its linear complexity and from sharing the external parameters across multiple heads. However, its performance is suboptimal, as these designs limit the network capacity.
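To make the "shared external parameters" design concrete, here is a minimal NumPy sketch of multi-head external attention. The shapes, the head-splitting scheme, and the double-normalization step are assumptions for illustration, not the paper's exact implementation; the key point is that one external key/value memory (`mk`, `mv`) is reused by every head, and the cost is linear in the number of tokens.

```python
import numpy as np

def multi_head_external_attention(x, mk, mv, num_heads):
    """Sketch of multi-head external attention.

    x:  (n, d)       input tokens
    mk: (s, d_head)  external key memory, shared across all heads
    mv: (s, d_head)  external value memory, shared across all heads
    """
    n, d = x.shape
    d_head = d // num_heads
    heads = x.reshape(n, num_heads, d_head)  # split channels into heads
    out = np.empty_like(heads)
    for h in range(num_heads):
        attn = heads[:, h, :] @ mk.T                        # (n, s): linear in n
        # double normalization (softmax over slots, l1 over tokens); an
        # assumption following the external-attention formulation
        attn = np.exp(attn - attn.max(axis=1, keepdims=True))
        attn = attn / attn.sum(axis=1, keepdims=True)
        attn = attn / (attn.sum(axis=0, keepdims=True) + 1e-9)
        out[:, h, :] = attn @ mv                            # (n, d_head)
    return out.reshape(n, d)
```

Because `mk` and `mv` have a fixed number of slots `s`, each head costs O(n·s·d_head) rather than the O(n²) of self-attention, which is the source of the inference-speed advantage noted above.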


A ImageNet Pre-training

Table 1: Training settings on ImageNet classification


Both RTFormer-Slim and RTFormer-Base outperform the corresponding DDRNet variants. The self-attention used for comparison follows (12). For Linformer attention, we directly report a result without hyper-parameter tuning. "#Params" refers to the number of parameters.
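The linear-complexity claim can be illustrated with a rough operation count. The formulas below are simplifications I am assuming for illustration (projections and softmax costs ignored): self-attention scales quadratically with the token count n, while external attention with a fixed memory size s scales linearly.

```python
# Rough multiply-accumulate (MAC) counts; simplified assumptions only.

def self_attention_macs(n: int, d: int) -> int:
    # QK^T is (n x d)(d x n) and AV is (n x n)(n x d): quadratic in n
    return 2 * n * n * d

def external_attention_macs(n: int, d: int, s: int) -> int:
    # X Mk^T and A Mv each touch the fixed s-slot memory: linear in n
    return 2 * n * s * d

# Doubling the tokens quadruples self-attention cost but only doubles
# external-attention cost (memory size s held fixed, e.g. 64 slots).
assert self_attention_macs(2048, 64) == 4 * self_attention_macs(1024, 64)
assert external_attention_macs(2048, 64, 64) == 2 * external_attention_macs(1024, 64, 64)
```

This is why external attention keeps inference fast at the high token counts of semantic-segmentation feature maps, even though, as noted above, its accuracy trails multi-head self-attention.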